
    Algorithms for approximate Bayesian inference with applications to astronomical data analysis

    Bayesian inference is a theoretically well-founded and conceptually simple approach to data analysis. The computations in practical problems are anything but simple, though, and thus approximations are almost always a necessity. The topic of this thesis is approximate Bayesian inference and its applications in three intertwined problem domains. Variational Bayesian learning is one type of approximate inference. Its main advantage is its computational efficiency compared to the much-applied sampling-based methods. Its main disadvantage, on the other hand, is the large amount of analytical work required to derive the necessary components for the algorithm. One part of this thesis reports on an effort to automate variational Bayesian learning of a certain class of models. The second part of the thesis is concerned with heteroscedastic modelling, which is synonymous with variance modelling. Heteroscedastic models are particularly suitable for the Bayesian treatment, as many of the traditional estimation methods do not produce satisfactory results for them. In the thesis, variance models and algorithms for estimating them are studied in two different contexts: in source separation and in regression. Astronomical applications constitute the third part of the thesis. Two problems are posed. One is concerned with the separation of stellar subpopulation spectra from observed galaxy spectra; the other is concerned with estimating the time delays in gravitational lensing. Solutions to both of these problems are presented, which rely heavily on the machinery of approximate inference.

    Uncovering delayed patterns in noisy and irregularly sampled time series: an astronomy application

    We study the problem of estimating the time delay between two signals representing delayed, irregularly sampled and noisy versions of the same underlying pattern. We propose and demonstrate an evolutionary algorithm for the (hyper)parameter estimation of a kernel-based technique in the context of an astronomical problem, namely estimating the time delay between two gravitationally lensed signals from a distant quasar. Mixed types (integer and real) are used to represent variables within the evolutionary algorithm. We test the algorithm on several artificial data sets, and also on real astronomical observations of quasar Q0957+561. By carrying out a statistical analysis of the results we present a detailed comparison of our method with the most popular methods for time delay estimation in astrophysics. Our method yields more accurate and more stable time delay estimates: for Q0957+561, we obtain 419.6 days for the time delay between images A and B. Our methodology can be readily applied to current state-of-the-art optical monitoring data in astronomy, but can also be applied in other disciplines involving similar time series data. Comment: 36 pages, 10 figures, 16 tables; accepted for publication in Pattern Recognition. This is a shortened version of the article: interested readers are urged to refer to the published version.
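As a rough illustration of the estimation task only (not the authors' kernel-plus-evolutionary method), a plain grid search over candidate delays, with linear interpolation between the irregular samples, can recover the delay on easy synthetic data. All names, the test signal, and the noise levels below are made up for the sketch:

```python
import numpy as np

def estimate_delay(t_a, y_a, t_b, y_b, delays):
    """Grid-search delay estimate: shift signal B back by each candidate
    delay, interpolate signal A at the shifted times, and pick the delay
    with the smallest mean squared discrepancy."""
    best_delay, best_cost = None, np.inf
    for d in delays:
        t_shift = t_b - d
        # compare only where the shifted times fall inside A's support
        mask = (t_shift >= t_a.min()) & (t_shift <= t_a.max())
        if mask.sum() < 2:
            continue
        a_interp = np.interp(t_shift[mask], t_a, y_a)
        cost = np.mean((a_interp - y_b[mask]) ** 2)
        if cost < best_cost:
            best_delay, best_cost = d, cost
    return best_delay

# Irregularly sampled, noisy copies of the same pattern, true delay 5.0
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0, 100, 200))
t_b = np.sort(rng.uniform(5, 105, 200))
pattern = lambda t: np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
y_a = pattern(t_a) + 0.05 * rng.normal(size=t_a.size)
y_b = pattern(t_b - 5.0) + 0.05 * rng.normal(size=t_b.size)

delay = estimate_delay(t_a, y_a, t_b, y_b, np.arange(0.0, 10.0, 0.1))
```

On real quasar light curves this naive search is far less robust than the paper's method, which is exactly why the kernel regularisation and the evolutionary hyperparameter search matter.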

    A Variational EM Approach to Predicting Uncertainty in Supervised Learning

    In many applications of supervised learning, the conditional average of the target variables is not sufficient for prediction. The dependencies between the explanatory variables and the target variables can be much more complex, calling for modelling the full conditional probability density. The ubiquitous problem with such methods is overfitting: due to the flexibility of the model, the likelihood of any data point can be made arbitrarily large. In this paper, a method for predicting uncertainty by modelling the conditional density is presented, based on conditioning the scale parameter of the noise process on the explanatory variables. The regularisation problems are solved by learning the model using variational EM. Results with synthetic data show that the approach works well, and experiments with real-world environmental data are promising.
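The overfitting problem the abstract describes can be made concrete in a few lines of NumPy: if each data point is allowed its own conditional mean and scale, the Gaussian likelihood grows without bound as one scale shrinks. This is a hypothetical illustration of the failure mode, not the paper's variational EM solution:

```python
import numpy as np

def gaussian_nll(y, mean, sigma):
    """Negative log-likelihood of y under independent N(mean, sigma^2)."""
    return np.sum(np.log(sigma) + 0.5 * ((y - mean) / sigma) ** 2)

y = np.array([0.3, -1.2, 2.1, 0.7])

# Degenerate "fit": place each point's conditional mean exactly on the
# point and shrink one conditional scale -- the likelihood of the data
# grows without bound (the NLL diverges to -inf).
nlls = []
for sigma0 in [1.0, 1e-3, 1e-6]:
    sigmas = np.array([sigma0, 1.0, 1.0, 1.0])
    nlls.append(gaussian_nll(y, y, sigmas))
```

Each halving of `sigma0` lowers the NLL further, which is why a pure maximum-likelihood fit of a flexible conditional density needs the kind of regularisation that the variational treatment provides.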

    Hierarchical Variance Models of Image Sequences (Kuvasekvenssien hierarkkiset varianssimallit)

    Models of image sequences based on unsupervised learning typically produce simple features such as edge filters. These simple features do not provide particularly high-level information about the image sequence. By combining the information they produce, however, it is possible to extract more meaningful features from the data. The values predicted by statistical models are usually the expectations of the underlying probability distributions; higher-order statistics are ignored. Variance describes the spread of a probability distribution around its mean. Estimating variances jointly with expectations is difficult and is usually not attempted. It is nevertheless well known that in many data sets the variance contains a great deal of information that cannot be extracted by modelling the means alone. The essential question in this work is whether modelling the variances of image sequences gains anything useful compared to ordinary models. The work shows that this is indeed the case, and a hierarchical model exploiting variances is constructed. A learning algorithm, including local update rules and global initialisation schemes, is also presented. The basic method applied is variational Bayesian learning, which has proven to be a reliable method for solving even difficult problems. The model is tested on artificial data in order to show that the learning algorithm works. Simulations with an image sequence generated from a natural scene show that the algorithm also works on more realistic data.

    Variational Learning for Rectified Factor Analysis (with A. Kabán)

    Linear factor models with non-negativity constraints have received a great deal of interest in a number of problem domains. In existing approaches, positivity has often been associated with sparsity. In this paper we argue that sparsity of the factors is not always a desirable option, but certainly a technical limitation of the currently existing solutions. We then reformulate the problem in order to relax the sparsity constraint while retaining positivity. This is achieved by employing a rectification nonlinearity rather than a positively supported prior directly on the latent space. A variational learning procedure is derived for the proposed model and this is contrasted to existing related approaches. Both i.i.d. and first-order AR variants of the proposed model are provided, and they are experimentally demonstrated with artificial data. Application to the analysis of galaxy spectra shows the benefits of the method in a real-world astrophysical problem, where the existing approach is not a viable alternative.
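A generative sketch of the model idea, assuming a shifted-Gaussian latent prior: rectifying Gaussian latents yields non-negative factors that need not be sparse (here roughly 16% of the factor values land exactly at zero). All dimensions and parameters are made up for illustration, and the inference procedure itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_factors, n_dims = 1000, 3, 8

# Gaussian latent sources with a positive mean, passed through a
# rectification nonlinearity to enforce non-negativity without
# forcing most factor values to zero
s = rng.normal(loc=1.0, scale=1.0, size=(n_obs, n_factors))
r = np.maximum(s, 0.0)                       # rectified factors
W = rng.normal(size=(n_factors, n_dims))     # mixing matrix
x = r @ W + 0.1 * rng.normal(size=(n_obs, n_dims))
```

Shifting the latent mean further positive would make the factors denser still, which is the flexibility the rectification formulation buys over a sparsity-inducing positively supported prior.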

    Hierarchical Models of Variance Sources

    In many models, variances are assumed to be constant, although this assumption is often unrealistic in practice. Joint modelling of means and variances is difficult in many learning approaches, because it can lead to infinite probability densities. We show that a Bayesian variational technique which is sensitive to probability mass instead of density is able to jointly model both variances and means. We consider a model structure where a Gaussian variable, called a variance node, controls the variance of another Gaussian variable. Variance nodes make it possible to build hierarchical models for both variances and means. We report experiments with artificial data which demonstrate the ability of the learning algorithm to find variance sources that explain and characterise well the variances in the multidimensional data. Experiments with biomedical MEG data show that variance sources are present in real-world signals.
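The variance-node construction can be sketched by forward sampling, assuming the log-variance parameterisation: a Gaussian variable u sets the variance of another Gaussian variable via exp(u), and the resulting marginal is visibly super-Gaussian. A hypothetical illustration of the generative structure only, not the paper's learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Variance node: a Gaussian variable u controls the log-variance of y,
# i.e. y ~ N(0, exp(u)) with u ~ N(0, 1)
u = rng.normal(0.0, 1.0, size=n)
y = rng.normal(0.0, np.exp(0.5 * u))   # std = exp(u / 2)

# The marginal of y is super-Gaussian: its kurtosis exceeds the
# Gaussian value of 3 (analytically it is 3e, about 8.2, here)
kurtosis = np.mean(y ** 4) / np.mean(y ** 2) ** 2
```

This excess kurtosis is one signature of a hidden variance source, and it is the kind of structure in real signals (such as MEG data) that mean-only models cannot capture.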

    Building Blocks For Variational Bayesian Learning Of Latent Variable Models

    We introduce standardised building blocks designed to be used with variational Bayesian learning. The blocks include Gaussian variables, summation, multiplication, nonlinearity, and delay. A large variety of latent variable models can be constructed from these blocks, including variance models and nonlinear modelling, which are lacking from most existing variational systems. The introduced blocks are designed to fit together and to yield efficient update rules. Practical implementation of various models is easy thanks to an associated software package which derives the learning formulas automatically once a specific model structure has been fixed. Variational Bayesian learning provides a cost function which is used both for updating the variables of the model and for optimising the model structure. All the computations can be carried out locally, resulting in linear computational complexity.
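A toy forward model in the spirit of the blocks (Gaussian variables, summation, and a delay) composed into a first-order AR latent process with a Gaussian observation. This only illustrates how such blocks compose generatively; the package's automatically derived update rules are not reproduced, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(4)

# Delay + summation blocks give an AR(1) latent signal,
#   s_t = a * s_{t-1} + innovation,
# observed through a Gaussian block, x_t = w * s_t + noise.
T, a, w = 300, 0.9, 2.0
s = np.zeros(T)
for t in range(1, T):
    s[t] = a * s[t - 1] + rng.normal(0.0, 1.0)
x = w * s + rng.normal(0.0, 0.1, size=T)
```

Because every block depends only on its immediate parents, inference in such a model can proceed by local updates, which is where the linear computational complexity mentioned above comes from.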